Simple Principles of Metalearning
Authors
Abstract
The goal of metalearning is to generate useful shifts of inductive bias by adapting the current learning strategy in a "useful" way. Our learner leads a single life during which actions are continually executed according to the system's internal state and current policy (a modifiable, probabilistic algorithm mapping environmental inputs and internal states to outputs and new internal states). An action is considered a learning algorithm if it can modify the policy. Effects of learning processes on later learning processes are measured using reward/time ratios. Occasional backtracking enforces success histories of still valid policy modifications corresponding to histories of lifelong reward accelerations. The principle allows for plugging in a wide variety of learning algorithms. In particular, it allows for embedding the learner's policy modification strategy within the policy itself (self-reference). To demonstrate the principle's feasibility in cases where conventional reinforcement learning fails, we test it in complex, non-Markovian, changing environments ("POMDPs"). One of the tasks involves more than 10^13 states, two learners that both cooperate and compete, and strongly delayed reinforcement signals (initially separated by more than 300,000 time steps). The biggest difference between time and space is that you can't reuse time.
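The backtracking rule sketched in the abstract can be made concrete with a short, illustrative snippet. This is only a minimal sketch, assuming a stack of checkpoints that record when each policy modification was made and the cumulative reward observed at that moment; names such as `Checkpoint`, `success_story_holds`, and `ssa_backtrack` are placeholders, not identifiers from the paper.

```python
# Minimal sketch of the reward/time backtracking idea described in the abstract:
# keep only those policy modifications whose reward/time ratios form a strictly
# increasing "success story". All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Checkpoint:
    time: float               # lifetime at which a policy modification was made
    reward: float             # cumulative reward observed up to that time
    undo: Callable[[], None]  # restores the policy to its state before the modification


def success_story_holds(stack: List[Checkpoint], now: float, total_reward: float) -> bool:
    """True iff reward/time ratios strictly increase along the stack:
    lifetime average < rate since 1st modification < rate since 2nd < ..."""
    rates = [total_reward / max(now, 1e-12)]
    rates += [(total_reward - cp.reward) / max(now - cp.time, 1e-12) for cp in stack]
    return all(earlier < later for earlier, later in zip(rates, rates[1:]))


def ssa_backtrack(stack: List[Checkpoint], now: float, total_reward: float) -> None:
    """Pop (and undo) the most recent policy modifications until the surviving
    ones correspond to a history of lifelong reward accelerations."""
    while stack and not success_story_holds(stack, now, total_reward):
        stack.pop().undo()
```

At every checkpoint the learner would call `ssa_backtrack` before committing a new modification, so the surviving stack always encodes the success history of still valid policy modifications that the abstract describes.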
Similar resources
Meta-learning Method for Automatic Selection of Algorithms for Text Classification
The paper presents a meta-learning approach to the textual document classification task and automatic selection of the best available algorithm for creating classifiers. After a brief introductory description of the principles of document preprocessing and the creation and evaluation of classifiers, the metalearning approach is presented as a method for automatic selection of the most appropriate cla...
KIMEL: A kernel incremental metalearning algorithm
The kernel method is a powerful tool for extending an algorithm from the linear to the nonlinear case. A metalearning algorithm learns the base learning algorithm, thereby improving the performance of the learning system. Usually, metalearning algorithms exhibit a faster convergence rate and lower Mean-Square Error (MSE) than the corresponding base learning algorithms. In this paper, we present a kernelized meta...
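As a rough illustration of a metalearner that "learns the base learning algorithm", the sketch below adapts the step size of a plain LMS filter from the correlation of successive gradients. It is a generic adaptive step-size rule shown for illustration only, not the kernelized KIMEL update, and all variable names and constants are assumptions.

```python
# Generic illustration (not KIMEL itself): a base LMS filter whose step size
# is adapted by a meta-level rule, so the meta level learns the base learner.
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_steps = 4, 5000
w_true = np.array([0.6, -0.3, 0.2, 0.1])   # unknown filter to identify
w = np.zeros(n_taps)                       # base learner's weights
mu = 0.01                                  # base step size, tuned by the meta level
meta_rate = 1e-3
x_prev, e_prev = np.zeros(n_taps), 0.0

for _ in range(n_steps):
    x = rng.standard_normal(n_taps)
    d = w_true @ x + 0.01 * rng.standard_normal()   # desired signal + noise
    e = d - w @ x                                   # base learner's error
    # Meta update: grow the step size while successive gradients point the same
    # way, shrink it when they oscillate (an assumed rule, not the paper's).
    mu = float(np.clip(mu + meta_rate * e * e_prev * (x @ x_prev), 1e-4, 0.1))
    w = w + mu * e * x                              # base LMS update with meta-learned mu
    x_prev, e_prev = x, e

print("estimated weights:", np.round(w, 3))
```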
Using Metalearning to Predict When Parameter Optimization Is Likely to Improve Classification Accuracy
Work on metalearning for algorithm selection has often been criticized because it mostly considers only the default parameter settings of the candidate base learning algorithms. Many have indeed argued that the choice of parameter values can have a significant impact on accuracy. Yet little empirical evidence exists to provide definitive support for that argument. Recent experiments do suggest ...
Pruning Meta-Classifiers in a Distributed Data Mining System
JAM is a powerful and portable agent-based distributed data mining system that employs metalearning techniques to integrate a number of independent classifiers (models) derived in parallel from independent and (possibly) inherently distributed databases. Although meta-learning promotes scalability and accuracy in a simple and straightforward manner, brute force metalearning techniques can resul...
Multistage Neural Network Metalearning with Application to Foreign Exchange Rates Forecasting
In this study, we propose a multistage neural network metalearning technique for financial time series prediction. First of all, an interval sampling technique is used to generate different training subsets. Different neural network models are then trained on these subsets to formulate different base models. Subsequently, to improve t...
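A toy sketch of such a multistage setup is given below, under stated assumptions: "interval sampling" is read here as taking every n-th example with different offsets, the base models are small scikit-learn MLPs, and the combination stage (truncated in the snippet above) is replaced by a simple linear meta-model. None of these choices are claimed to match the paper.

```python
# Illustrative multistage metalearning sketch: interval-sampled training
# subsets, one small neural network per subset, and a linear meta-model that
# combines the base predictions. Sampling scheme and meta-model are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression


def make_lagged(series: np.ndarray, lags: int = 4):
    """Turn a univariate series into (lagged inputs, next-value targets)."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]


def interval_subsets(X, y, n_subsets: int = 3):
    """Assumed interval sampling: every n-th example, with different offsets."""
    return [(X[k::n_subsets], y[k::n_subsets]) for k in range(n_subsets)]


rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
X, y = make_lagged(series)
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

# Stage 1: train one base neural network per interval-sampled subset.
base_models = []
for X_sub, y_sub in interval_subsets(X_train, y_train):
    m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    base_models.append(m.fit(X_sub, y_sub))

# Stage 2: a meta-model learns to combine the base models' predictions.
meta_inputs = np.column_stack([m.predict(X_train) for m in base_models])
meta_model = LinearRegression().fit(meta_inputs, y_train)

test_inputs = np.column_stack([m.predict(X_test) for m in base_models])
print("meta-model MSE:", np.mean((meta_model.predict(test_inputs) - y_test) ** 2))
```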